[cli] externalDocuments implemented, with typescript-operations support #10659
Conversation
🦋 Changeset detected. Latest commit: d25dfc0. The changes in this PR will be included in the next version bump. This PR includes changesets to release 6 packages.
Force-pushed from c88a311 to 3005df7.
```ts
  throw error;
const readOnlyHash = JSON.stringify(readOnlyPointerMap);
const outputDocumentsReadOnly = await cache(
  'documents',
```
A readonly document is also just a document for hashing purposes.
This ensures there is only one document in the final merged array, because we use the hash to detect file uniqueness.
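The dedupe idea above can be sketched as: hash each document's content and keep only the first file per hash. Here `hashContent` is a hypothetical stand-in built on Node's crypto module, not the codegen's actual hasher:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical stand-in for the codegen's content hasher.
function hashContent(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

// The same fragment loaded as both standard and read-only hashes
// identically, so the merged array keeps only one copy of it.
const standard = ['fragment UserFields on User { id }'];
const readOnly = ['fragment UserFields on User { id }', 'query Me { me { id } }'];

const seen: Record<string, true> = {};
const merged: string[] = [];
for (const doc of [...standard, ...readOnly]) {
  const hash = hashContent(doc);
  if (seen[hash]) continue;
  seen[hash] = true;
  merged.push(doc);
}
// merged holds two documents: the fragment once, plus the query.
```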
```diff
-  async loadDocuments(pointer: Types.OperationDocument[]): Promise<Types.DocumentFile[]> {
+  async loadDocuments(
+    pointer: UnnormalizedTypeDefPointer,
```
The only caller of this function will be passing in UnnormalizedTypeDefPointer, and nothing else. May as well narrow the type to make it easier to code.
```diff
-function hashDocument(doc: Types.DocumentFile) {
-  if (doc.rawSDL) {
-    return hashContent(doc.rawSDL);
-  }
-
-  if (doc.document) {
-    return hashContent(print(doc.document));
-  }
-
-  return null;
-}
+async function addHashToDocumentFiles(
+  documentFilesPromise: Promise<Source[]>,
+  type: 'standard' | 'read-only',
+): Promise<Types.DocumentFile[]> {
+  function hashDocument(doc: Source) {
+    if (doc.rawSDL) {
+      return hashContent(doc.rawSDL);
+    }
+
+    if (doc.document) {
+      return hashContent(print(doc.document));
+    }
+
+    return null;
+  }
```
hashDocument is only used by one function, so I moved it inline to avoid jumping around.
```diff
   documentPointers: UnnormalizedTypeDefPointer | UnnormalizedTypeDefPointer[],
   config: Types.Config,
-): Promise<Types.DocumentFile[]> {
+): Promise<Source[]> {
```
When this function finishes, we'd only get the base Source. Types.DocumentFile is what we get after processing the Source (adding hash and type).
```ts
);

return newDocuments.map((document, index) => ({
  ...documents[index],
```
Context for future me:
Previously we threw away the hash when optimising the doc.
Now that we introduce type ('read-only' or 'standard'), we need it for further processing, so we need to keep it like this.
Maybe we should not add these custom Codegen meta fields directly to the node, since the optimizeDocuments implementation may remove them in the future.
Maybe we can add a special key/value to reduce this risk, e.g. __meta: { hash: 'abc', type: 'standard' } 🤔
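The `__meta` idea above could look something like this. This is a hypothetical shape, not the actual codegen API; the `withMeta` helper and field names are assumptions for illustration:

```typescript
// Hypothetical: group codegen-specific metadata under one __meta key instead
// of spreading hash/type directly onto the document node, so a future
// optimizeDocuments pass is less likely to strip individual fields.
type CodegenMeta = { hash: string; type: 'standard' | 'read-only' };

interface DocumentWithMeta {
  rawSDL?: string;
  __meta?: CodegenMeta;
}

function withMeta(doc: DocumentWithMeta, meta: CodegenMeta): DocumentWithMeta {
  return { ...doc, __meta: meta };
}

const doc = withMeta(
  { rawSDL: 'query Me { me { id } }' },
  { hash: 'abc', type: 'standard' },
);
```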
```ts
hash: string;
type: 'standard' | 'read-only';
```
Source is the default from parsing the docs.
DocumentFile is the "processed" source, with Codegen metadata.
We may want to group them to avoid edge cases mentioned in: https://github.com/dotansimha/graphql-code-generator/pull/10659/changes#r3032421307
Force-pushed from 8f4ba67 to df25ba7.
Documents now have type being 'standard' | 'read-only'. If we don't supply that, the plugin will generate nothing.
This utility helps by creating Types.DocumentFile with hash and type hardcoded. If we need to make the params partial with overrides in the future, we could.
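A minimal sketch of the Source vs. DocumentFile distinction and a helper in the spirit of the utility described above. Field names and the `toDocumentFile` helper are assumptions based on the surrounding diff, not the actual codegen types:

```typescript
// Source: the raw result of loading/parsing a document.
interface Source {
  rawSDL?: string;
  location?: string;
}

// DocumentFile: a Source after codegen processing adds its metadata.
interface DocumentFile extends Source {
  hash: string;
  type: 'standard' | 'read-only';
}

// Hypothetical helper: build a DocumentFile from a Source with hash and
// type supplied explicitly, mirroring "hash and type hardcoded" above.
function toDocumentFile(src: Source, hash: string, type: DocumentFile['type']): DocumentFile {
  return { ...src, hash, type };
}

const file = toDocumentFile({ rawSDL: 'query Me { me { id } }' }, 'abc123', 'read-only');
```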
Force-pushed from 40f5ca8 to 819cf71.
```ts
const populateDocumentPointerMap = (
  allDocumentsDenormalizedPointers: Types.OperationDocument[],
): UnnormalizedTypeDefPointer => {
  const pointer: UnnormalizedTypeDefPointer = {};
  for (const denormalizedPtr of allDocumentsDenormalizedPointers) {
    if (typeof denormalizedPtr === 'string') {
      pointer[denormalizedPtr] = {};
    } else if (typeof denormalizedPtr === 'object') {
      Object.assign(pointer, denormalizedPtr);
    }
  }
  return pointer;
};
```
Inlining populateDocumentPointerMap makes it a bit easier to read as we don't jump around the file. We could take this pure function and put it at the bottom of the file if needed.
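For reference, the merge behavior of that logic on mixed string/object pointers, as a self-contained usage sketch (the pointer types are simplified to plain records here; the glob patterns are made-up examples):

```typescript
type Pointer = Record<string, unknown>;

// Mirrors the logic above: string pointers become keys with empty options,
// object pointers are shallow-merged in as-is.
function populateDocumentPointerMap(pointers: Array<string | Pointer>): Pointer {
  const pointer: Pointer = {};
  for (const p of pointers) {
    if (typeof p === 'string') {
      pointer[p] = {};
    } else if (typeof p === 'object') {
      Object.assign(pointer, p);
    }
  }
  return pointer;
}

const map = populateDocumentPointerMap([
  'src/**/*.graphql',
  { 'src/extra.graphql': { noRequire: true } },
]);
// map: { 'src/**/*.graphql': {}, 'src/extra.graphql': { noRequire: true } }
```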
```ts
  processedFile[file.hash] = true;

  return prev;
}, []);
```
Looks like the reduce accumulator prev is never used — outputDocuments.push(file) writes to an external array. The return value of the entire reduce is discarded.
It works, by chance, but this is confusing. Maybe a plain for...of loop would be cleaner?
Additionally, if a document has hash: null (which hashDocument() can return when rawSDL and document are both absent), processedFile[null] will be "null" — meaning all null-hash documents get deduped down to 1, which may be wrong.
> Looks like the reduce accumulator prev is never used — outputDocuments.push(file) writes to an external array. The return value of the entire reduce is discarded.
> It works, by chance, but this is confusing. Maybe a plain for...of loop would be cleaner?
That's true! I think I was trying to reduce the result to outputDocuments, but it's accessible inside the loop anyways. I've made it for...of in 7ec7e06
> Additionally, if a document has hash: null (which hashDocument() can return when rawSDL and document are both absent), processedFile[null] will be "null" — meaning all null-hash documents get deduped down to 1, which may be wrong.
Huh, that's true, the typecheck in this repo is not strict enough. Something I'll need to configure after the major release.
I wonder when null would happen though 🤔 When Codegen is parsing an empty file?
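The null-hash collapse discussed above is easy to reproduce: JavaScript object keys are strings, so a null hash is coerced to the literal key "null" and all null-hash entries collide. A minimal sketch, not the codegen code:

```typescript
// Minimal repro of the key coercion behind the dedupe bug.
const processed: Record<string, true> = {};
const hashes: Array<string | null> = ['abc', null, null];
const kept: Array<string | null> = [];
for (const h of hashes) {
  const key = String(h); // null -> "null", so every null hash shares one key
  if (processed[key]) continue;
  processed[key] = true;
  kept.push(h);
}
// kept ends up as ['abc', null]: the second null-hash entry was deduped away.
```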
```diff
-});
-
-outputDocuments = result.documents;
+const processedFile: Record<string, true> = {};
+const mergedDocuments = [
+  ...outputDocumentsStandard,
+  ...outputDocumentsReadOnly,
+];
+for (const file of mergedDocuments) {
+  if (processedFile[file.hash]) {
+    continue;
+  }
+  outputDocuments.push(file);
+  processedFile[file.hash] = true;
+}
```
I've considered forwarding documentsReadOnly as-is but hit a few annoying cases:
- documents are processed a lot after the initial loading stage here. If we kept documentsReadOnly separate, we'd need to run the same functions over them
- plugins use positional params, which means documentsReadOnly would need to be at the last position to avoid a breaking change. This works, but feels disconnected

For these reasons, I feel it's easier to merge them as early as possible and run the same processing over both types, so plugins like typescript-operations can handle the filtering without having to access another param.
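Under the merge-early approach, a plugin only receives the one documents array and filters by type itself. A sketch of that filtering; the `documentsToGenerate` helper is hypothetical and typescript-operations' actual filtering may differ:

```typescript
interface DocumentFile {
  rawSDL: string;
  type: 'standard' | 'read-only';
}

// Hypothetical plugin-side filter: read-only documents stay available for
// fragment inlining, but only standard documents get types generated.
function documentsToGenerate(documents: DocumentFile[]): DocumentFile[] {
  return documents.filter(d => d.type === 'standard');
}

const mergedDocs: DocumentFile[] = [
  { rawSDL: 'query Me { me { id } }', type: 'standard' },
  { rawSDL: 'fragment UserFields on User { id }', type: 'read-only' },
];
const toGenerate = documentsToGenerate(mergedDocs);
// toGenerate contains only the standard query document.
```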
typescript-operations support
Force-pushed from a2dd1f3 to f35cb96.
- `externalDocuments` is used as references during code generation. Said documents are often fragments that need to be inlined into the document types/AST. However, the process should not generate types/AST for docs in externalDocuments.
- Add changeset
- Update generated files
Force-pushed from f35cb96 to d25dfc0.
Description
documentsReadOnly implemented, for safe handling of fragment includes between packages in the monorepo.

Related: [#10658](https://github.com/dotansimha/graphql-code-generator/issues/10658)
Type of change
How Has This Been Tested?